
Chapter 4
Determinants, Adjoints, Inverses

In this chapter all matrices are square and have only integer entries. The latter restriction avoids round-off errors in computations. For applications to the plane and space, most matrices will be 2x2 or 3x3. But computations with 3x3 matrices can be lengthy, so computer assistance is sometimes more convenient. Most of the computer programs are valid for matrices of sizes up to 12x12.

Section 1:   Determinants

Much of the discussion here reviews parts of the discussion of linearly independent vector arrays introduced in an earlier volume. Here the focus is on the algebraic aspects of determinants.

A number will be associated with each square matrix. If all the entries in the matrix are integers, rational numbers, or real numbers, then that number will be an integer, rational, or real, respectively. If the large curved parentheses on both sides of a square matrix are replaced by vertical lines, the matrix becomes a determinant. The associated number is called the value of the determinant.

For example, if M is the following 2x2 matrix then its determinant det M is shown after the matrix:

To evaluate the determinant det M is to calculate the value of det M. Draw the two diagonals. From the product of entries along the blue diagonal subtract the product of entries along the red diagonal:

Click here to see more examples of evaluation of determinants of 2x2 matrices.
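The 2x2 rule can be sketched in a few lines of Python (an illustration, not the chapter's own determinant program):

```python
def det2(m):
    # m is a 2x2 matrix given as a list of rows
    (a, b), (c, d) = m
    # product along the main diagonal minus the
    # product along the other diagonal
    return a * d - b * c

print(det2([[1, 2], [3, 4]]))   # -> -2
```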


Given below is a 3x3 matrix M. Its determinant det M follows it.
Evaluating the determinant of a 3x3 matrix is more involved. Six diagonals are needed. To use them, the first two columns of the determinant are copied to the right of the determinant. Then three blue diagonals and three red diagonals are drawn as shown. From the sum of the products of the entries on the blue diagonals subtract the sum of the products of the entries on the red diagonals.
Click here to see more examples of evaluation of determinants of 3x3 matrices.
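The cross hatch rule for 3x3 determinants can likewise be sketched in Python (again an illustration, not the chapter's program). Taking column indices modulo 3 plays the role of copying the first two columns to the right of the determinant:

```python
def det3(m):
    # blue diagonals run down to the right, red diagonals down to the left;
    # the modulo-3 index arithmetic wraps around like the copied columns
    blue = sum(m[0][j] * m[1][(j + 1) % 3] * m[2][(j + 2) % 3] for j in range(3))
    red  = sum(m[0][j] * m[1][(j - 1) % 3] * m[2][(j - 2) % 3] for j in range(3))
    return blue - red

print(det3([[1, 2, 3], [-4, -5, 6], [7, -8, 9]]))   # -> 360
```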

The above methods of using diagonals drawn through the determinants are called cross hatch methods. These methods can be used to support the following "official" definition of the evaluation of 2x2 and 3x3 determinants.

[1.1] (Evaluating determinants)

Click here for further details of evaluating all determinants.

[1.2] (Determinants of transposes) The determinants of a square matrix and its transpose are equal.
Notation: if M is a square matrix and Mt is its transpose, then det M = det Mt.

Example. If M is the following matrix and Mt is its transpose, then the values of both determinants equal -7.

The following is the transpose and its evaluation produced by the cross hatch method:

Compare this expansion with that given in [1.1] for the 3x3 determinant. Rearranging the terms shows that the two expansions are equal.

The proof for 2x2 matrices is very simple and left to the reader. This theorem is true for square matrices of all sizes.
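The matrices of this example appear in a figure; as an illustration of [1.2], the sketch below uses a stand-in 2x2 matrix (not the chapter's, but chosen so that its determinant is also -7):

```python
def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

def transpose(m):
    # the rows of the transpose are the columns of m
    return [list(col) for col in zip(*m)]

M = [[1, 2], [5, 3]]                  # stand-in matrix, det M = -7
print(det2(M), det2(transpose(M)))    # -> -7 -7
```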

[1.3] (Row or column of zeros) If an entire row or an entire column of a matrix has only zeros, then the value of the determinant of that matrix is zero.
Notation: if square matrix M has a row of zeros then det M = 0. If M has a column of zeros then det M = 0.

The statement is trivial for 2x2 determinants. Consider now the 3x3 determinant as shown in [1.1]. Suppose the first column is all zeros. This means that all the αs are zeros:

α1 = 0,    α2 = 0,    α3 = 0
But this letter α appears in every term of the expansion. Therefore, every term in the expansion is zero. Hence the value of the determinant is zero.
A similar situation exists if all the entries in the second column or in the third column are zeros. This means that all the βs or all the γs are zeros. But the βs and γs also appear in every term of the expansion.
If the 3x3 determinant has a row of zeros, then its transpose has a column of zeros. The value of the transpose determinant is zero. By [1.2] then the value of the original determinant is zero.


[1.4] (Sign checkerboard patterns) In a determinant the location of each entry is given a location sign according to the following patterns:

The main diagonal consists only of + signs. The signs alternate between + and -, horizontally and vertically. These signs are independent of whether the entries themselves are positive or negative. (However, the rule for products of signs still holds; for example, two negatives make a positive.)

In the determinants in [1.1] all entries in the same column have the same letter, and all entries in the same row have the same subscript. In the 3x3 determinant, for any entry, if all the entries in that entry's column are deleted and all the entries in its row are deleted, a 2x2 determinant is created. That smaller determinant is called the minor of that entry (actually the minor of the location of that entry). The following shows the construction of the minors of α1 and γ2:

The minors are determinants, and their values can be obtained using the methods discussed above for 2x2 determinants.

[1.5] (Evaluation of a 3x3 determinant by expansion by minors) Choose any column (or row), and form the products of all the entries in that column (or row) with their minors. Before each product place the sign of the location of the entry according to [1.4]. The sum of these signed products equals the original 3x3 determinant.

Examples: expansion along the first column, and expansion along the second row:

For a discussion supporting and deriving the expansions click here.

The expansion by minors presents another way of evaluating a 3x3 determinant. In general the computation is not reduced, except in the case where a column or row contains two zeros. For example, the evaluation of the following 3x3 determinant is easily done by expansion along the second column. The two zero entries mean that two of the 2x2 determinants need not be evaluated.
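The determinant of this example is shown in a figure; the sketch below uses a stand-in 3x3 matrix (not the chapter's) whose second column contains two zeros, and expands along that column as in [1.5]:

```python
def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

def minor(m, i, j):
    # delete row i and column j of the 3x3 matrix m
    return [[m[r][c] for c in range(3) if c != j] for r in range(3) if r != i]

def det3_along_column(m, j):
    # [1.5]: signed sum of entry times minor down column j;
    # (-1) ** (i + j) is the checkerboard sign of [1.4]
    return sum((-1) ** (i + j) * m[i][j] * det2(minor(m, i, j)) for i in range(3))

M = [[5, 0, 2], [1, 3, 4], [2, 0, 6]]   # stand-in matrix; second column has two zeros
print(det3_along_column(M, 1))          # -> 78; only one 2x2 minor need be evaluated
```

Expanding along any other column gives the same value, as [1.5] guarantees.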

There are methods for replacing all entries in a column or row by zeros without changing the value of the determinant. Then the expansion by minors can be done on that column or row.

Theorem [1.3] can easily be proven by expansion by minors along the row or column of zeros. Other facts about 3x3 determinants can be proven by showing them true for 2x2 determinants and then extending to 3x3 determinants by expansion by minors:

[1.6] For any determinant
  (a) If two rows or two columns are identical, then the value of the determinant is zero;
  (b) If two rows or two columns are interchanged then the value of the determinant is negated;
  (c) If a row or column is multiplied by a number, then the value of the determinant is multiplied by that number;

Actually (b) can be used to prove (a). Simply interchange the identical rows or identical columns.

It can be proven that expansion by minors can be done on determinants of any size. This means that a 4x4 determinant can be evaluated using the evaluations of four 3x3 determinants.
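A minimal Python sketch of expansion by minors for a square matrix of any size, expanding along the first row at each step (an illustration, not the chapter's program):

```python
def det(m):
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # minor of the entry at row 0, column j:
        # delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        # (-1) ** j is the checkerboard sign along the first row
        total += (-1) ** j * m[0][j] * det(minor)
    return total

print(det([[1, 2, 3], [-4, -5, 6], [7, -8, 9]]))   # -> 360
```

A 4x4 determinant is reduced to four 3x3 determinants, each of which is reduced to three 2x2 determinants, exactly as described above.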



Section 2:   The Adjoint of a Square Matrix

In this section the idea of a cofactor is introduced. The cofactor of an entry (actually of its location) is the minor of that entry with the location sign attached to the minor.
Using cofactors instead of minors theorem [1.5] can be restated more simply:

[2.1] (Expansion by cofactors) Choose any column (or row) in a determinant, form products of all the entries in that column (or row) and their cofactors. The sum of all such products is equal to the original determinant. Notation:
Column expansions:
α1 cofactor(α1) + α2 cofactor(α2) + α3 cofactor(α3) = 3x3 determinant in [1.1]
β1 cofactor(β1) + β2 cofactor(β2) + β3 cofactor(β3) = 3x3 determinant in [1.1]
γ1 cofactor(γ1) + γ2 cofactor(γ2) + γ3 cofactor(γ3) = 3x3 determinant in [1.1]
Row expansions:
α1 cofactor(α1) + β1 cofactor(β1) + γ1 cofactor(γ1) = 3x3 determinant in [1.1]
α2 cofactor(α2) + β2 cofactor(β2) + γ2 cofactor(γ2) = 3x3 determinant in [1.1]
α3 cofactor(α3) + β3 cofactor(β3) + γ3 cofactor(γ3) = 3x3 determinant in [1.1]

[2.2] If in the expansion along a column (or row) of a determinant the cofactors of the entries of another column (or row) are used, then the sum of products equals zero.
Notation:
Column expansions with the wrong cofactors:
α1 cofactor(β1) + α2 cofactor(β2) + α3 cofactor(β3) = 0
α1 cofactor(γ1) + α2 cofactor(γ2) + α3 cofactor(γ3) = 0
β1 cofactor(α1) + β2 cofactor(α2) + β3 cofactor(α3) = 0
β1 cofactor(γ1) + β2 cofactor(γ2) + β3 cofactor(γ3) = 0
γ1 cofactor(α1) + γ2 cofactor(α2) + γ3 cofactor(α3) = 0
γ1 cofactor(β1) + γ2 cofactor(β2) + γ3 cofactor(β3) = 0
Row expansions with the wrong cofactors:
α1 cofactor(α2) + β1 cofactor(β2) + γ1 cofactor(γ2) = 0
α1 cofactor(α3) + β1 cofactor(β3) + γ1 cofactor(γ3) = 0
α2 cofactor(α1) + β2 cofactor(β1) + γ2 cofactor(γ1) = 0
α2 cofactor(α3) + β2 cofactor(β3) + γ2 cofactor(γ3) = 0
α3 cofactor(α1) + β3 cofactor(β1) + γ3 cofactor(γ1) = 0
α3 cofactor(α2) + β3 cofactor(β2) + γ3 cofactor(γ2) = 0

Each of these equations involves expressions in αs, βs and γs which cancel each other out to give zero. Another way to see this is that each of these expressions is an expansion of a 3x3 determinant with two identical columns (or rows), where the expansion is along one of those columns (or rows); by [1.6a] that determinant is zero. Click here to see a more detailed explanation.
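A quick numerical check of [2.2], using the matrix of this section's adjoint example (rows 1, 2, 3 / -4, -5, 6 / 7, -8, 9): pairing the entries of the first column with the cofactors of the second column gives zero, while the correct pairing gives the determinant.

```python
def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

def cofactor(m, i, j):
    # minor of location (i, j) with the checkerboard sign of [1.4] attached
    minor = [[m[r][c] for c in range(3) if c != j] for r in range(3) if r != i]
    return (-1) ** (i + j) * det2(minor)

M = [[1, 2, 3], [-4, -5, 6], [7, -8, 9]]
# first-column entries with second-column cofactors: the "wrong" pairing
print(sum(M[i][0] * cofactor(M, i, 1) for i in range(3)))   # -> 0
# second-column entries with their own cofactors: the determinant
print(sum(M[i][1] * cofactor(M, i, 1) for i in range(3)))   # -> 360
```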

The following is a definition for the adjoint by giving the plan for its construction:

[2.3] (Adjoint) To obtain the adjoint of a square matrix, replace every entry by its cofactor and take the transpose of the resulting matrix.
Notation:

Example:

Here
cofactor(1) = (-5)(9) - (-8)(6) = 3,   cofactor(2) = -[(-4)(9) - (7)(6)] = 78,   cofactor(3) = (-4)(-8) - (7)(-5) = 67
cofactor(-4) = -[(2)(9) - (-8)(3)] = -42,   cofactor(-5) = (1)(9) - (7)(3) = -12,   cofactor(6) = -[(1)(-8) - (7)(2)] = 22
cofactor(7) = (2)(6) - (-5)(3) = 27,   cofactor(-8) = -[(1)(6) - (-4)(3)] = -18,   cofactor(9) = (1)(-5) - (-4)(2) = 3
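The construction in [2.3] can be sketched in Python (an illustration, not the chapter's program); applied to this example it reproduces the cofactors just computed, arranged in transposed order:

```python
def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

def cofactor(m, i, j):
    minor = [[m[r][c] for c in range(3) if c != j] for r in range(3) if r != i]
    return (-1) ** (i + j) * det2(minor)

def adjoint(m):
    # [2.3]: replace every entry by its cofactor, then transpose
    cof = [[cofactor(m, i, j) for j in range(3)] for i in range(3)]
    return [list(col) for col in zip(*cof)]

M = [[1, 2, 3], [-4, -5, 6], [7, -8, 9]]
print(adjoint(M))   # -> [[3, -42, 27], [78, -12, -18], [67, 22, 3]]
```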

[2.4] (Some properties of the adjoint) Let M1 and M2 be square matrices of the same size. Then    (a) The adjoint of the identity matrix is itself: adj(I) = I;
   (b) The adjoint of a product of two square matrices of the same size = product of their adjoints in the reverse order: adj (M1 M2) = (adj M2)(adj M1)

The proof of (a) is trivial. There is a proof of (b), but it is not given here. The proof uses [3.1] in the next section.



Section 3:   Inverses of square matrices

The following is an important addition to the list of properties of adjoints, given in [2.4]:

[3.1] (Product of matrices and their adjoints) The product of any square matrix and its adjoint is equal to the product of the determinant of the matrix and the identity matrix. Similarly, the product of the adjoint and the matrix is equal to product of the determinant of the matrix and the identity matrix.
Notation: M(adj M) = (det M)I   (adj M)M = (det M)I

Example:

Recall that det M is a number and that I is a square matrix. For a 3x3 matrix the right sides of the equations have the following form:

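A numerical check of [3.1], using the matrix and adjoint from the example in [2.3] (an illustration, not the chapter's program):

```python
M    = [[1, 2, 3], [-4, -5, 6], [7, -8, 9]]
adjM = [[3, -42, 27], [78, -12, -18], [67, 22, 3]]   # adjoint computed in [2.3]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

# both products equal (det M)I = 360 times the 3x3 identity matrix
print(matmul(M, adjM))   # -> [[360, 0, 0], [0, 360, 0], [0, 0, 360]]
print(matmul(adjM, M))   # -> [[360, 0, 0], [0, 360, 0], [0, 0, 360]]
```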
If det M is not zero then M is non-singular. In that case, both sides of the equations   M(adj M) = (det M)I   and (adj M)M = (det M)I   may be divided by   det M   to get
M(adj M)/det M = I   and   [(adj M)/det M] M = I
But by definition the inverse M-1 of a square matrix M satisfies both equations:   MM-1 = I and M-1M = I,   where I is the identity matrix of the same size as M. Therefore,

[3.2a] (Inverse of a square matrix) The inverse of a non-singular square matrix is obtained by dividing the adjoint of that matrix by its determinant.
Notation: if M is a non-singular square matrix then   (adj M)/det M   is the inverse of M

[3.2b] (Inverse of a square matrix) A square matrix has an inverse if and only if the matrix is non-singular.
Notation: M-1   exists if and only if   det M   is not zero.
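As a Python sketch of [3.2a] (an illustration, not the chapter's program), using exact fractions so that no round-off occurs:

```python
from fractions import Fraction

M    = [[1, 2, 3], [-4, -5, 6], [7, -8, 9]]
adjM = [[3, -42, 27], [78, -12, -18], [67, 22, 3]]   # adjoint from [2.3]
detM = 360

# [3.2a]: the inverse is the adjoint divided by the determinant;
# Fraction keeps the entries exact instead of using floats
Minv = [[Fraction(adjM[i][j], detM) for j in range(3)] for i in range(3)]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

print(matmul(M, Minv) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]])   # -> True
print(matmul(Minv, M) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]])   # -> True
```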

That the collection of all non-singular nxn matrices forms a multiplicative group has been discussed for n=2 and n=3. For non-singular matrices of the same size, the product is non-singular and the inverse of the product equals the product of the inverses in the reverse order:

(M1M2)-1 = (M2)-1(M1)-1.
This is true in any multiplicative group; the reversed order matters when the group is non-commutative.

There is another method to find the inverse of a non-singular nxn matrix without using the adjoint; it uses n2 linear equations instead.
Example: to find the inverse of the matrix M given just after [3.1]. Let X be the unknown inverse. Then by definition of inverse:

MX = I
X must be a 3x3 matrix with unknown entries   x1, x2, x3, x4, x5, x6, x7, x8, x9:
Multiplying out the two matrices on the left side of the equation and equating the nine entries of that product with the entries of the identity matrix on the right side produces the following nine equations in nine unknowns:
 x1 + 2x4 + 3x7 = 1,         x2 + 2x5 + 3x8 = 0,         x3 + 2x6 + 3x9 = 0,
-4x1 - 5x4 + 6x7 = 0,      -4x2 - 5x5 + 6x8 = 1,      -4x3 - 5x6 + 6x9 = 0,
7x1 - 8x4 + 9x7 = 0,        7x2 - 8x5 + 9x8 = 0,        7x3 - 8x6 + 9x9 = 1
Solving these equations for the x's:
x1 = 3/360,    x2 = -42/360,    x3 = 27/360,
x4 = 78/360,    x5 = -12/360,    x6 = -18/360,
x7 = 67/360,    x8 = 22/360,    x9 = 3/360
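The solution can be checked by substituting back: arranged as the matrix X, the nine fractions must satisfy MX = I. A short sketch with exact fractions (note that X is exactly (adj M)/det M, in agreement with [3.2a]):

```python
from fractions import Fraction

M = [[1, 2, 3], [-4, -5, 6], [7, -8, 9]]
x = [Fraction(n, 360) for n in (3, -42, 27, 78, -12, -18, 67, 22, 3)]
X = [x[0:3], x[3:6], x[6:9]]    # the solved unknowns, row by row

MX = [[sum(M[i][k] * X[k][j] for k in range(3)) for j in range(3)]
      for i in range(3)]
print(MX == [[1, 0, 0], [0, 1, 0], [0, 0, 1]])   # -> True
```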